Optimization Strategies and Backpropagation Neural Networks
Author
Abstract
Adaptive learning-rate algorithms try to decrease the error at each iteration by searching for a local minimum with small weight steps, which are usually constrained by highly problem-dependent heuristic learning parameters. Based on the idea of decreasing the error function at each iteration, we suggest monotone learning strategies that guarantee convergence to a minimizer of the error function without using highly problem-dependent heuristics. Furthermore, we introduce the idea of nonmonotone learning, which provides fast, stable, and reliable training, and we test both approaches in simulation experiments.
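As a rough illustration of the nonmonotone idea described in the abstract (not the authors' exact algorithm), the sketch below accepts a gradient step whenever the new error does not exceed the maximum error over the last M iterations, so the error may occasionally rise while long-run decrease is still enforced. All function names, parameters, and constants here are hypothetical.

```python
import numpy as np

def train_nonmonotone(grad, loss, w, lr=0.5, M=10, sigma=1e-4, steps=200):
    """Gradient descent with a nonmonotone acceptance test: a step is
    accepted if the new error does not exceed the maximum error over
    the last M iterations (minus a small sufficient-decrease term);
    otherwise the learning rate is halved and the step is retried."""
    history = [loss(w)]
    for _ in range(steps):
        g = grad(w)
        eta = lr
        while True:
            w_new = w - eta * g
            ref = max(history[-M:])  # nonmonotone reference value
            if loss(w_new) <= ref - sigma * eta * np.dot(g, g):
                break                # step accepted
            eta *= 0.5               # backtrack: shrink the learning rate
        w = w_new
        history.append(loss(w))
    return w, history

# Hypothetical demo: minimize a simple quadratic "error function".
loss = lambda v: float(np.sum((v - 3.0) ** 2))
grad = lambda v: 2.0 * (v - 3.0)
w, hist = train_nonmonotone(grad, loss, np.array([10.0, -5.0]))
```

A strictly monotone strategy would compare against `history[-1]` instead of `max(history[-M:])`; the nonmonotone reference lets the iterates traverse shallow increases in the error surface without stalling.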
Similar Resources
HYBRID ARTIFICIAL NEURAL NETWORKS BASED ON ACO-RPROP FOR GENERATING MULTIPLE SPECTRUM-COMPATIBLE ARTIFICIAL EARTHQUAKE RECORDS FOR SPECIFIED SITE GEOLOGY
The main objective of this paper is to use ant-optimized neural networks to generate artificial earthquake records. In this regard, training accelerograms were selected according to the site geology of the recording station, and the Wavelet Packet Transform (WPT) was used to decompose these records. Then Artificial Neural Networks (ANN) optimized with Ant Colony Optimization and the resilient backpropagation algorith...
Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization
In this paper we explore different strategies to guide the backpropagation algorithm used for training artificial neural networks. Two variants of the steepest-descent-based backpropagation algorithm and four variants of the conjugate gradient algorithm are tested. The variants differ in whether or not the time component is used, and whether or not additional gradient information is utili...
Investigation of Mechanical Properties of Self Compacting Polymeric Concrete with Backpropagation Network
Acrylic polymer is highly stable against chemicals and is a good choice when concrete is subject to chemical attack. In this study, self-compacting concrete (SCC) made using acrylic polymer, nanosilica, and microsilica has been investigated. The results of experimental testing showed that the addition of microsilica and acrylic polymer decreased the tensile, compressive and bending strength...
Beyond Backpropagation: Using Simulated Annealing for Training Neural Networks
The vast majority of neural network research relies on a gradient algorithm, typically a variation of backpropagation, to obtain the weights of the model. Because of the enigmatic nature of complex nonlinear optimization problems, such as training artificial neural networks, this technique has often produced inconsistent and unpredictable results. To go beyond backpropagation’s typical selectio...
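The core idea in the entry above can be sketched briefly (a minimal illustration, not the authors' implementation; all names and constants are assumptions): simulated annealing perturbs the weight vector at random, always accepts improvements, and accepts occasional uphill moves with probability exp(-delta/T), which lets the search escape the local minima that trap gradient methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_weights(loss, w, T=1.0, cooling=0.95, steps=2000):
    """Simulated annealing over a weight vector: propose a random
    perturbation, always accept improvements, and accept uphill moves
    with probability exp(-delta/T); T shrinks geometrically so the
    search gradually settles into a good region."""
    best_w, best_e = w.copy(), loss(w)
    e = best_e
    for _ in range(steps):
        cand = w + rng.normal(scale=0.1, size=w.shape)
        e_cand = loss(cand)
        delta = e_cand - e
        if delta < 0 or rng.random() < np.exp(-delta / T):
            w, e = cand, e_cand
            if e < best_e:
                best_w, best_e = w.copy(), e
        T *= cooling  # cool the temperature each iteration
    return best_w, best_e

# Hypothetical demo: a 3-parameter "network" whose error is quadratic.
loss = lambda v: float(np.sum((v - 1.0) ** 2))
best_w, best_e = anneal_weights(loss, np.zeros(3))
```

Note that no gradients are computed anywhere; only the error value is needed, which is why such methods apply even when the error surface is non-differentiable.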
Convexified Convolutional Neural Networks
We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For lear...
Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation
Error backpropagation in feedforward neural network models is a popular learning algorithm that has its roots in nonlinear estimation and optimization. It is being used routinely to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical architecture for backpropagation has severe restrictions. The extension of backpropagation to networks ...
Journal:
Volume, Issue:
Pages: -
Publication date: 1999